CryptoBap: A Binary Analysis Platform for Cryptographic Protocols
We introduce CryptoBap, a platform to verify weak secrecy and authentication for the (ARMv8 and RISC-V) machine code of cryptographic protocols. We achieve this by first transpiling the binary of protocols into an intermediate representation and then performing a crypto-aware symbolic execution to automatically extract a model of the protocol that represents all its execution paths. Our symbolic execution resolves indirect jumps and supports bounded loops using the loop-summarization technique, which we fully automate. The extracted model is then translated into models amenable to automated verification via ProVerif and CryptoVerif using a third-party toolchain. We prove the soundness of the proposed approach and use CryptoBap to verify multiple case studies ranging from toy examples to real-world protocols: TinySSH, an implementation of SSH, and WireGuard, a modern VPN protocol.
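The core of the pipeline the abstract describes is a symbolic execution that enumerates all execution paths while bounding loops. The following Python sketch is purely illustrative (CryptoBap works on a binary-analysis IR, not this toy one, and all names here are hypothetical): it forks a path at each branch and follows back edges at most a fixed number of times, where real loop summarization would replace the unrolling with a closed-form summary.

```python
# Toy path enumeration with bounded loops; illustrative only, not
# CryptoBap's actual IR or API.
from collections import Counter

def explore(program, start=0, loop_bound=2):
    """Enumerate execution paths of a toy IR.

    program maps a pc to one of:
      ("branch", cond, taken_pc, fallthru_pc), ("jmp", pc), or ("halt",).
    Each target pc is visited at most loop_bound times per path.
    """
    paths, worklist = [], [(start, [], Counter())]
    while worklist:
        pc, conds, visits = worklist.pop()
        instr = program[pc]
        if instr[0] == "halt":
            paths.append(conds)
        elif instr[0] == "jmp":
            target = instr[1]
            if visits[target] < loop_bound:
                worklist.append((target, conds, visits + Counter({target: 1})))
            else:
                paths.append(conds)  # bound reached: cut this path here
        else:  # branch: fork the path on both outcomes
            _, cond, taken, fallthru = instr
            for nxt, c in ((taken, cond), (fallthru, f"not({cond})")):
                if visits[nxt] < loop_bound:
                    worklist.append((nxt, conds + [c], visits + Counter({nxt: 1})))
    return paths

# A loop "while i < n: ..." compiled to the toy IR:
loop = {0: ("branch", "i<n", 1, 2), 1: ("jmp", 0), 2: ("halt",)}
```

With `loop_bound=2` the loop yields three paths: exit immediately, after one iteration, or after two.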
Serberus: Protecting Cryptographic Code from Spectres at Compile-Time
We present Serberus, the first comprehensive mitigation for hardening
constant-time (CT) code against Spectre attacks (involving the PHT, BTB, RSB,
STL and/or PSF speculation primitives) on existing hardware. Serberus is based
on three insights. First, some hardware control-flow integrity (CFI)
protections restrict transient control-flow to the extent that it may be
comprehensively considered by software analyses. Second, conformance to the
accepted CT code discipline permits two code patterns that are unsafe in the
post-Spectre era. Third, once these code patterns are addressed, all Spectre
leakage of secrets in CT programs can be attributed to one of four classes of
taint primitives--instructions that can transiently assign a secret value to a
publicly-typed register. We evaluate Serberus on cryptographic primitives in
the OpenSSL, Libsodium, and HACL* libraries. Serberus introduces 21.3% runtime
overhead on average, compared to 24.9% for the next closest state-of-the-art
software mitigation, which is less secure.
Comment: Authors' version; to appear in the Proceedings of the IEEE Symposium
on Security and Privacy (S&P) 202
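The taint-primitive notion admits a toy illustration. The Python sketch below is not Serberus's analysis; the instruction names and the single-window model are entirely hypothetical assumptions. It captures only the Spectre-PHT intuition: a load executed inside an open misprediction window may transiently move a secret value into a publicly-typed register unless a serializing fence closes the window first.

```python
# Toy Spectre-PHT flagging: instruction names are hypothetical, and a
# real analysis models many speculation primitives, not just branches.
def find_spectre_pht_loads(trace):
    """Return indices of loads that sit inside an open speculation window."""
    flagged, in_window = [], False
    for i, (op, *_) in enumerate(trace):
        if op == "branch":     # conditional branch: PHT misprediction window opens
            in_window = True
        elif op == "lfence":   # serializing fence closes the window
            in_window = False
        elif op == "load" and in_window:
            flagged.append(i)  # may transiently read a secret into a public register
    return flagged
```

Inserting an `lfence` between the branch and the load removes the flag, mirroring the classic (if expensive) mitigation that hardware CFI plus targeted fencing refines.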
SEAL: Capability-Based Access Control for Data-Analytic Scenarios
Data science is the basis for various disciplines in the Big-Data era. Due to the high volume, velocity, and variety of big data, data owners often store their data on data servers. In the past few years, many computation techniques have emerged to protect the security and privacy of such shared data while enabling analysis thereon. Hence, access-control systems must provide a fine-grained, multi-layer mechanism to protect data. However, existing systems and frameworks fail to satisfy all these requirements and to resolve the trust issue between data owners and analysts.
In this paper, we propose SEAL as a framework to protect the security and privacy of shared data. SEAL enables computations on shared data while they remain under the complete control of data owners through pre-defined policies. Our framework employs the capability-object model to define flexible access policies. SEAL's access-control system supports delegating and revoking access privileges and other access-control customizations. In addition, SEAL can assign security labels to privacy-sensitive data and track them to enable data owners to define where and when a data analyst can access their data. We demonstrate the practicability of our approach by presenting a prototype implementation of SEAL.
Furthermore, we show the flexibility of our framework by implementing multiple data-analytic scenarios, which cover different applications.
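The capability-object model the abstract mentions can be sketched minimally. The Python below is a hypothetical illustration, not SEAL's API: a capability names a resource and a right set, delegation can only narrow rights, and revoking a capability invalidates everything delegated from it.

```python
# Minimal capability-object sketch in the spirit of SEAL's model;
# class and method names are illustrative assumptions.
class Capability:
    def __init__(self, resource, rights, parent=None):
        self.resource = resource
        self.rights = frozenset(rights)
        self.parent = parent      # delegation chain back to the owner
        self.revoked = False

    def delegate(self, rights):
        """Hand out a narrower capability; rights can only shrink."""
        return Capability(self.resource, self.rights & frozenset(rights), parent=self)

    def revoke(self):
        self.revoked = True

    def permits(self, resource, right):
        """Valid only if no capability in the delegation chain was revoked."""
        cap = self
        while cap is not None:
            if cap.revoked:
                return False
            cap = cap.parent
        return resource == self.resource and right in self.rights
```

A data owner would hold the root capability and delegate a read-only one to an analyst; revoking the root immediately cuts off all delegated access, matching the delegation/revocation customizations described above.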
Nazi-Looted Property and Restitution in Libraries - Training Content for Information Professionals
Sixty years after the end of the Second World War and the National Socialist regime, Nazi-looted property ("NS-Raubgut") can still be found in the holdings of German libraries. This bachelor's thesis develops an overview of the topic of Nazi-looted property and restitution in libraries, with the aim of showing, in the course of the thesis, how this content can be integrated into the training of information professionals. After an introduction to the subject, the thesis first surveys how the topic is currently handled within the training of information professionals at German universities. Building on this, lecture content suitable for delivery in such training is developed. A consideration of possible forms of delivery is also included.
The investigation of relationship between binocular vision status and migraine headaches
Background and aim: Given the contradictory results on the relationship between migraine headaches and binocular vision disorders, this study aimed to determine the relationship between some binocular vision parameters and migraine. Methods: In this case-control study, 30 migraine patients who met the inclusion criteria were studied alongside 30 subjects without migraine headaches. Patients were first screened using a standard headache questionnaire, with final confirmation by a neurologist. Refractive errors were determined by retinoscopy, and then the near point of convergence, jump convergence, stereopsis, fusional reserves, and distance and near deviation were measured in both groups. Data were analyzed using Mann-Whitney tests and univariate and multiple logistic regression. Results: Mean stereopsis in the case and control groups was 154.17±33.82 and 49.0±26.1 seconds of arc, respectively. In the case and control groups, respectively, mean near phoria was 8.3±6.72 and 6.5±2.85, distance positive fusional reserves 10.7±2.76 and 6.07±5.07, distance negative fusional reserves 18.2±4.7 and 11.0±7.37, near positive fusional reserves 16.5±4.49 and 13.97±8.72, and near negative fusional reserves 22.7±5.22 and 14.67±9.25 prism diopters. The means of these variables and of jump convergence differed significantly between the two groups (P<0.05), while the means of the other variables did not. In multiple logistic regression, only jump convergence and positive fusional convergence at distance and near remained in the model (P<0.05
On Compositional Information Flow Aware Refinement
The concepts of information flow security and refinement are known to have had a troubled relationship ever since the seminal work of McLean. In this work we study refinements that
support changes in data representation and semantics, including the addition of state variables that may induce new observational power or side channels. We propose a new epistemic approach to ignorance-preserving refinement where an abstract model is used as a specification of a system’s permitted information flows, that may include the declassification of secret information. The core idea is to require that refinement steps must not induce observer knowledge that is not already available in the abstract model.
Our study is set in the context of a class of shared variable multi-agent models similar to interpreted systems in epistemic logic. We demonstrate the expressiveness of our framework through a series of small examples and compare our approach to existing, stricter notions of information-flow secure refinement based on bisimulations and noninterference preservation. Interestingly, noninterference preservation is not supported “out of the box” in our setting, because refinement steps may introduce new secrets that are independent of secrets already present at the abstract level. To support verification, we first introduce a “cube-shaped” unwinding condition related to conditions recently studied in the context of value-dependent noninterference, kernel verification, and secure compilation. A fundamental problem with ignorance-preserving refinement, caused by the support for general data and observation refinement, is that sequential composability is lost. We propose a solution based on relational pre- and post-conditions and illustrate its use together with unwinding on the oblivious RAM construction of Chung and Pass.
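The core requirement, that a refinement step must not induce observer knowledge unavailable in the abstract model, can be phrased set-theoretically over indistinguishability relations. The Python sketch below is a simplified illustration under strong assumptions (a shared finite state space and deterministic observation functions), not the paper's epistemic framework.

```python
# Toy ignorance-preservation check; names and the finite-state setting
# are illustrative assumptions.
def indistinguishable(states, obs):
    """Pairs of states an observer cannot tell apart under observation obs."""
    return {(s, t) for s in states for t in states if obs(s) == obs(t)}

def preserves_ignorance(states, abstract_obs, concrete_obs):
    """A refinement leaks nothing new iff every pair the abstract observer
    cannot distinguish stays indistinguishable under the concrete observation."""
    return indistinguishable(states, abstract_obs) <= indistinguishable(states, concrete_obs)
```

For example, refining a parity-revealing observation into one that exposes the full state value fails the check, since the concrete observer gains knowledge the abstract model never permitted.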
Axiomatic hardware-software contracts for security
We propose leakage containment models (LCMs)—novel axiomatic security contracts which support formally reasoning about the security guarantees of programs when they run on particular microarchitectures. Our core contribution is an axiomatic vocabulary for formalizing LCMs, derived from the established axiomatic vocabulary for formalizing processor memory consistency models. Using this vocabulary, we formalize microarchitectural leakage—focusing on leakage through hardware memory systems—so that it can be automatically detected in programs, and provide a taxonomy for classifying said leakage by severity. To illustrate the efficacy of LCMs, we first demonstrate that our leakage definition faithfully captures a sampling of (transient and non-transient) microarchitectural attacks from the literature. Second, we develop a static analysis tool based on LCMs which automatically identifies Spectre vulnerabilities in programs and scales to analyze real-world crypto libraries.
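The memory-system leakage that LCMs formalize axiomatically has a simple operational reading: two runs that differ only in a secret input must not produce different sequences of accessed addresses, since addresses are what a cache-side attacker observes. The Python sketch below illustrates that reading only; the instruction names are hypothetical and it replaces the paper's axiomatic, event-graph formulation with direct trace comparison.

```python
# Toy address-trace leakage check; illustrative only, not the LCM formalism.
def address_trace(program, secret):
    """Run a toy straight-line program, recording every accessed address
    (the attacker's cache-side view)."""
    trace = []
    for op, base in program:
        if op == "load_public":
            trace.append(base)           # fixed address: nothing secret-dependent
        elif op == "load_secret_indexed":
            trace.append(base + secret)  # table[secret]-style access
    return trace

def leaks(program, secrets=(0, 1)):
    """Leakage in the hardware-memory sense: runs differing only in the
    secret produce different address traces."""
    return len({tuple(address_trace(program, s)) for s in secrets}) > 1
```

A secret-indexed table lookup leaks under this definition while a fixed-address load does not, matching the classic distinction that constant-time coding disciplines enforce.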